21 research outputs found

    Reading Your Own Mind: Dynamic Visualization of Real-Time Neural Signals

    Brain-computer interface (BCI) systems, which allow humans to control external devices directly from brain activity, are becoming increasingly popular due to dramatic advances in the ability to both capture and interpret brain signals. Further advancing BCI systems is a compelling goal, both because of the neurophysiological insights gained from deriving a control signal from brain activity and because of the potential for direct brain control of external devices in applications such as brain injury recovery, human prosthetics, and robotics. The dynamic and adaptive nature of the brain makes it difficult to create classifiers or control systems that remain effective over time. However, it is precisely these qualities that offer the potential to use feedback to build on simple features and create complex control features that are robust over time. This dissertation presents work that addresses these opportunities for the specific case of electrocorticography (ECoG) recordings from clinical epilepsy patients. First, cued patient tasks were used to explore the predictive nature of both local and global features of the ECoG signal. Second, an algorithm was developed and tested for estimating the most informative features from naive observations of the ECoG signal. Third, a software system was built and tested that facilitates real-time visualization of ECoG signals and allows ECoG epilepsy patients to engage in an interactive BCI control-feature screening process.
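
    The second contribution — estimating the most informative features from naive observations — can be illustrated with a simple correlation-based feature ranking on synthetic data. This is a generic sketch, not the dissertation's actual algorithm; the feature count, effect size, and all variable names are hypothetical.

```python
import numpy as np

def feature_scores(X, y):
    """Rank candidate signal features by squared point-biserial correlation
    with a binary label (a simple stand-in for 'most informative')."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12)
    return r ** 2

rng = np.random.default_rng(7)
n, d = 300, 6
y = rng.integers(0, 2, n)            # hypothetical binary task label
X = rng.normal(size=(n, d))          # hypothetical candidate ECoG features
X[:, 2] += 1.5 * y                   # feature 2 carries the control signal

print(int(np.argmax(feature_scores(X, y))))  # 2
```

    A real feature-screening loop would recompute such scores as new observations arrive and present the top-ranked features to the patient for interactive testing.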

    Nonuniform high-gamma (60-500 Hz) power changes dissociate cognitive task and anatomy in human cortex

    High-gamma-band (>60 Hz) power changes in cortical electrophysiology are a reliable indicator of focal, event-related cortical activity. Despite discoveries of oscillatory subthreshold and synchronous suprathreshold activity at the cellular level, there is an increasingly popular view that high-gamma-band amplitude changes recorded from cellular ensembles are the result of asynchronous firing activity that yields wideband and uniform power increases. Others have demonstrated independence of power changes in the low- and high-gamma bands, but to date, no studies have shown evidence of any such independence above 60 Hz. Based on nonuniformities in time-frequency analyses of electrocorticographic (ECoG) signals, we hypothesized that induced high-gamma-band (60-500 Hz) power changes are more heterogeneous than currently understood. Using single-word repetition tasks in six human subjects, we showed that functional responsiveness of different ECoG high-gamma sub-bands can discriminate cognitive task (e.g., hearing, reading, speaking) and cortical locations. Power changes in these sub-bands of the high-gamma range are consistently present within single trials and have statistically different time courses within the trial structure. Moreover, when consolidated across all subjects within three task-relevant anatomic regions (sensorimotor, Broca's area, and superior temporal gyrus), these behavior- and location-dependent power changes evidenced nonuniform trends across the population. Together, the independence and nonuniformity of power changes across a broad range of frequencies suggest that a new approach to evaluating high-gamma-band cortical activity is necessary. These findings show that in addition to time and location, frequency is another fundamental dimension of high-gamma dynamics.
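
    The sub-band analysis described above can be illustrated with a minimal sketch: band-limited power estimated from the FFT of a synthetic signal, showing how two high-gamma sub-bands can respond differently. The sampling rate, band edges, and synthetic bursts are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np

def band_power(sig, fs, lo, hi):
    """Total spectral power of `sig` in the [lo, hi) Hz band via the FFT."""
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2
    return psd[(freqs >= lo) & (freqs < hi)].sum()

fs = 2000.0                        # hypothetical sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)

# Two synthetic "trials": one with an 80 Hz burst, one with a 200 Hz burst.
trial_a = rng.normal(0, 1, t.size) + 3 * np.sin(2 * np.pi * 80 * t)
trial_b = rng.normal(0, 1, t.size) + 3 * np.sin(2 * np.pi * 200 * t)

# Sub-band powers dissociate the trials even though both bursts are "high gamma".
print(band_power(trial_a, fs, 60, 120) > band_power(trial_a, fs, 150, 250))  # True
print(band_power(trial_b, fs, 150, 250) > band_power(trial_b, fs, 60, 120))  # True
```

    Treating each sub-band as a separate response channel, as here, is what allows frequency to act as an additional dimension alongside time and electrode location.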

    GridLoc: An automatic and unsupervised localization method for high-density ECoG grids

    Precise localization of electrodes is essential in the field of high-density (HD) electrocorticography (ECoG) brain signal analysis in order to accurately interpret the recorded activity in relation to functional anatomy. Current localization methods for subchronically implanted HD electrode grids involve post-operative imaging. However, for situations where post-operative imaging is not available, such as during acute measurements in awake surgery, electrode localization is complicated. Intra-operative photographs may be informative, but not for electrode grids positioned partially or fully under the skull. Here we present an automatic and unsupervised method to localize HD electrode grids that does not require post-operative imaging. The localization method, named GridLoc, is based on the hypothesis that the anatomical and vascular brain structures under the ECoG electrodes have an effect on the amplitude of the recorded ECoG signal. More specifically, we hypothesize that the spatial match between resting-state high-frequency band power (45–120 Hz) patterns over the grid and the anatomical features of the brain under the electrodes, such as the presence of sulci and larger blood vessels, can be used for adequate HD grid localization. We validate this hypothesis and compare the GridLoc results with electrode locations determined with post-operative imaging and/or photographs in 8 patients implanted with HD-ECoG grids. Locations agreed with an average difference of 1.94 ± 0.11 mm, which is comparable to differences reported earlier between post-operative imaging and photograph methods. The results suggest that resting-state high-frequency band activity can be used for accurate localization of HD grid electrodes on a pre-operative MRI scan and that GridLoc provides a convenient alternative to methods that rely on post-operative imaging or intra-operative photographs.
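
    The core idea behind GridLoc — matching a resting-state power pattern over the grid to anatomical features under the electrodes — can be sketched as a template-matching problem. The maps below are random synthetic stand-ins, and the scoring function and grid size are hypothetical choices, not the published method.

```python
import numpy as np

def best_grid_position(anat_map, grid_pattern):
    """Slide the grid's resting-state power pattern over an anatomical
    feature map; return the offset with the highest correlation score."""
    gh, gw = grid_pattern.shape
    H, W = anat_map.shape
    g = (grid_pattern - grid_pattern.mean()) / grid_pattern.std()
    best_score, best_pos = -np.inf, None
    for r in range(H - gh + 1):
        for c in range(W - gw + 1):
            patch = anat_map[r:r + gh, c:c + gw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            score = float((p * g).mean())   # Pearson-style similarity
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

rng = np.random.default_rng(1)
anat = rng.normal(size=(20, 20))                 # stand-in sulcus/vessel map
patch = anat[5:13, 7:15]                         # true grid location (5, 7)
pattern = patch + 0.2 * rng.normal(size=(8, 8))  # noisy 8x8 "power" pattern

pos, score = best_grid_position(anat, pattern)
print(pos)  # (5, 7)
```

    In the actual method the search space is the cortical surface of a pre-operative MRI rather than a rectangular map, but the matching logic is analogous.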

    Neural tuning to low-level features of speech throughout the perisylvian cortex

    Despite a large body of research, we continue to lack a detailed account of how auditory processing of continuous speech unfolds in the human brain. Previous research showed the propagation of low-level acoustic features of speech from posterior superior temporal gyrus toward anterior superior temporal gyrus in the human brain (Hullett et al., 2016). In this study, we investigate what happens to these neural representations past the superior temporal gyrus and how they engage higher-level language processing areas such as inferior frontal gyrus. We used low-level sound features to model neural responses to speech outside of the primary auditory cortex. Two complementary imaging techniques were used with human participants (both males and females): electrocorticography (ECoG) and fMRI. Both imaging techniques showed tuning of the perisylvian cortex to low-level speech features. With ECoG, we found evidence of propagation of the temporal features of speech sounds along the ventral pathway of language processing in the brain toward inferior frontal gyrus. Increasingly coarse temporal features of speech spreading from posterior superior temporal cortex toward inferior frontal gyrus were associated with linguistic features such as voice onset time, duration of the formant transitions, and phoneme, syllable, and word boundaries. The present findings provide the groundwork for a comprehensive bottom-up account of speech comprehension in the human brain.
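
    Modelling neural responses from low-level sound features, as described above, is commonly done with a regularized linear encoding model. The sketch below fits closed-form ridge regression on simulated data; the feature dimensionality, regularization strength, and noise level are arbitrary assumptions, not the study's parameters.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(2)
n, d = 500, 10
X = rng.normal(size=(n, d))              # hypothetical low-level sound features
w_true = rng.normal(size=d)              # hidden "tuning" of one electrode
y = X @ w_true + 0.1 * rng.normal(size=n)  # simulated neural response

w = ridge_fit(X, y, lam=1.0)
r = np.corrcoef(X @ w, y)[0, 1]          # prediction accuracy on the fit data
print(r > 0.95)  # True
```

    Comparing such prediction accuracies across electrodes (or fMRI voxels) is one standard way to map which cortical sites are tuned to which feature set.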

    Brain-optimized extraction of complex sound features that drive continuous auditory perception.

    Understanding how the human brain processes auditory input remains a challenge. Traditionally, a distinction between lower- and higher-level sound features is made, but their definition depends on a specific theoretical framework and might not match the neural representation of sound. Here, we postulate that constructing a data-driven neural model of auditory perception, with a minimum of theoretical assumptions about the relevant sound features, could provide an alternative approach and possibly a better match to the neural responses. We collected electrocorticography recordings from six patients who watched a long-duration feature film. The raw movie soundtrack was used to train an artificial neural network model to predict the associated neural responses. The model achieved high prediction accuracy and generalized well to a second dataset, where new participants watched a different film. The extracted bottom-up features captured acoustic properties that were specific to the type of sound and were associated with various response latency profiles and distinct cortical distributions. Specifically, several features encoded speech-related acoustic properties, with some features exhibiting shorter latency profiles (associated with responses in posterior perisylvian cortex) and others exhibiting longer latency profiles (associated with responses in anterior perisylvian cortex). Our results support and extend the current view on speech perception by demonstrating the presence of temporal hierarchies in the perisylvian cortex and involvement of cortical sites outside of this region during audiovisual speech perception.
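
    One measurable aspect of the latency profiles mentioned above is the stimulus-to-response lag, which can be estimated by maximizing the lagged correlation between a sound feature and a neural response. This is a toy sketch on synthetic data, not the paper's neural-network model; the lag range and noise level are assumptions.

```python
import numpy as np

def response_lag(feature, response, max_lag):
    """Estimate response latency (in samples) as the lag that maximizes the
    correlation between a stimulus feature and a neural response."""
    lags = np.arange(0, max_lag + 1)
    scores = [np.corrcoef(feature[:len(feature) - l], response[l:])[0, 1]
              for l in lags]
    return int(lags[int(np.argmax(scores))])

rng = np.random.default_rng(4)
feat = rng.normal(size=2000)                       # hypothetical sound feature
true_lag = 37                                      # samples of neural delay
resp = np.roll(feat, true_lag) + 0.3 * rng.normal(size=2000)

print(response_lag(feat, resp, max_lag=100))  # 37
```

    Repeating this per feature and per electrode yields exactly the kind of short-versus-long latency maps the abstract describes for posterior versus anterior perisylvian cortex.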

    Decoding hand gestures from primary somatosensory cortex using high-density ECoG

    Electrocorticography (ECoG) based Brain-Computer Interfaces (BCIs) have been proposed as a way to restore and replace motor function or communication in severely paralyzed people. To date, most motor-based BCIs have either focused on the sensorimotor cortex as a whole or on the primary motor cortex (M1) as a source of signals for this purpose. Still, target areas for BCI are not confined to M1, and more brain regions may provide suitable BCI control signals. A logical candidate is the primary somatosensory cortex (S1), which not only shares similar somatotopic organization to M1, but also has been suggested to have a role beyond sensory feedback during movement execution. Here, we investigated whether four complex hand gestures, taken from the American sign language alphabet, can be decoded exclusively from S1 using both spatial and temporal information. For decoding, we used the signal recorded from a small patch of cortex with subdural high-density (HD) grids in five patients with intractable epilepsy. Notably, we introduce a new method of trial alignment based on the increase of the electrophysiological response, which virtually eliminates the confounding effects of systematic and non-systematic temporal differences within and between gesture executions. Results show that S1 classification scores are high (76%), similar to those obtained from M1 (74%) and sensorimotor cortex as a whole (85%), and significantly above chance level (25%). We conclude that S1 offers characteristic spatiotemporal neuronal activation patterns that are discriminative between gestures, and that it is possible to decode gestures with high accuracy from a very small patch of cortex using subdurally implanted HD grids. The feasibility of decoding hand gestures using HD-ECoG grids encourages further investigation of implantable BCI systems for direct interaction between the brain and external devices with multiple degrees of freedom.
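
    The alignment-then-classification idea can be sketched as follows: each synthetic trial is realigned to the first threshold crossing of its response, removing onset jitter, and a nearest-centroid classifier then separates two "gestures" by their temporal profiles. Everything here (threshold, ramp shapes, trial counts) is a hypothetical stand-in for the paper's method.

```python
import numpy as np

rng = np.random.default_rng(5)

def align_to_onset(trial, thresh):
    """Shift a trial so its first sample above `thresh` (the rise of the
    electrophysiological response) becomes t = 0."""
    return trial[int(np.argmax(trial > thresh)):]

def make_trial(shape, jitter):
    """Synthetic power trace: noisy baseline plus a gesture-specific
    ramp starting at a jittered onset."""
    onset = 50 + rng.integers(0, jitter)
    trial = 0.1 * rng.normal(size=300)
    trial[onset:onset + 100] += shape
    return trial

ramp_a = np.linspace(0.0, 2.0, 100)      # "gesture A" temporal profile
ramp_b = np.linspace(2.0, 0.0, 100)      # "gesture B" temporal profile
trials = [make_trial(ramp_a, 40) for _ in range(20)] + \
         [make_trial(ramp_b, 40) for _ in range(20)]
labels = [0] * 20 + [1] * 20

# Realign every trial to its own response onset, then crop to equal length.
aligned = [align_to_onset(t, 0.5)[:100] for t in trials]

# Nearest-centroid classification on the aligned traces (no held-out split;
# this only illustrates why onset alignment makes the profiles separable).
c0 = np.mean([a for a, l in zip(aligned, labels) if l == 0], axis=0)
c1 = np.mean([a for a, l in zip(aligned, labels) if l == 1], axis=0)
pred = [int(np.sum((a - c1) ** 2) < np.sum((a - c0) ** 2)) for a in aligned]
acc = float(np.mean([p == l for p, l in zip(pred, labels)]))
print(acc > 0.9)  # True
```

    Without the realignment step, the onset jitter would smear the centroids and the same classifier would perform markedly worse.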

    High-frequency band temporal dynamics in response to a grasp force task

    OBJECTIVE: Brain-computer interfaces (BCIs) are being developed to restore reach and grasping movements of paralyzed individuals. Recent studies have shown that the kinetics of grasping movement, such as grasp force, can be successfully decoded from electrocorticography (ECoG) signals, and that high-frequency band (HFB) power changes provide discriminative information that contributes to accurate decoding of grasp force profiles. However, as the models used in these studies contained simultaneous information from multiple spectral features over multiple areas in the brain, it remains unclear what parameters of movement and force are encoded by the HFB signals and how these are represented temporally and spatially in the sensorimotor cortex (SMC). APPROACH: To investigate this, and to gain insight into the temporal dynamics of the HFB during grasping, we continuously modelled the ECoG HFB response recorded from nine individuals with epilepsy temporarily implanted with ECoG grids, who performed three different grasp force tasks. MAIN RESULTS: We show that a model based on the force onset and offset consistently provides a better fit to the HFB power responses than a model based on the force magnitude, irrespective of electrode location. SIGNIFICANCE: Our results suggest that HFB power, although potentially useful for continuous decoding, is more closely related to the changes in movement. This finding may contribute to more natural decoding of grasping movement in neural prosthetics.
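
    The model comparison described above — an onset/offset (transient) predictor versus a force-magnitude predictor — can be sketched on simulated data in which HFB power follows force transients rather than the held force. The trial timing, smoothing kernel, and noise level are assumptions for illustration only.

```python
import numpy as np

def r_squared(y, pred):
    """Coefficient of determination of a prediction."""
    return 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

def fit_r2(x, y):
    """R^2 of a least-squares fit of y on a single predictor plus intercept."""
    X = np.column_stack([x, np.ones_like(x)])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return r_squared(y, X @ w)

rng = np.random.default_rng(6)
n = 2000
force = np.zeros(n)
for start in range(200, n - 200, 400):                # repeated grasp trials
    force[start:start + 200] = rng.uniform(0.5, 1.5)  # held force magnitude

# Onset/offset predictor: a smoothed transient at each change in force.
transients = np.abs(np.diff(force, prepend=0.0))
onset_offset = np.convolve(transients, np.hanning(30), mode="same")

# Simulated HFB power follows the transients, not the held magnitude.
hfb = onset_offset + 0.05 * rng.normal(size=n)

print(fit_r2(onset_offset, hfb) > fit_r2(force, hfb))  # True: transient model wins
```

    Applying the same two-model comparison per electrode is what lets the conclusion hold "irrespective of electrode location".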
